A Convolutional Encoder Model for Neural Machine Translation
The prevalent approach to neural machine translation relies on bi-directional
LSTMs to encode the source sentence. In this paper we present a faster and
simpler architecture based on a succession of convolutional layers. This
allows the entire source sentence to be encoded simultaneously, in contrast
to recurrent networks, whose computation is constrained by temporal
dependencies. On WMT'16 English-Romanian translation we achieve accuracy
competitive with the state of the art, and we outperform several recently
published results on the WMT'15 English-German task. Our models obtain almost
the same accuracy as a very deep LSTM setup on WMT'14 English-French
translation. Our convolutional encoder speeds up CPU decoding by more than a
factor of two at the same or higher accuracy than a strong bi-directional
LSTM baseline.
Comment: 13 pages
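As a rough illustration of the idea, here is a minimal sketch in PyTorch of a stacked convolutional sentence encoder. The layer sizes, kernel width, GLU non-linearity, and the omission of position embeddings are assumptions for brevity, not the paper's exact configuration; the point is that every source position is encoded in one parallel pass rather than step by step.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Sketch of a convolutional sentence encoder: a stack of 1-D
    convolutions with gated linear units processes all positions at once."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 num_layers=5, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        pad = (kernel_size - 1) // 2  # keep output length equal to input
        self.convs = nn.ModuleList([
            # 2 * hidden_dim output channels; the GLU halves them again
            nn.Conv1d(embed_dim if i == 0 else hidden_dim,
                      2 * hidden_dim, kernel_size, padding=pad)
            for i in range(num_layers)
        ])

    def forward(self, src_tokens):            # (batch, src_len)
        x = self.embed(src_tokens)            # (batch, src_len, embed_dim)
        x = x.transpose(1, 2)                 # Conv1d expects (batch, C, len)
        for conv in self.convs:
            residual = x
            x = F.glu(conv(x), dim=1)         # gated linear unit over channels
            if residual.shape == x.shape:
                x = x + residual              # residual connection
        return x.transpose(1, 2)              # (batch, src_len, hidden_dim)

# Usage: encode a toy batch of token ids in a single parallel pass.
enc = ConvEncoder(vocab_size=10000)
out = enc(torch.randint(0, 10000, (2, 7)))   # -> shape (2, 7, 256)

Because no state is carried across positions, the convolutions over the whole sentence can run in parallel, which is the source of the decoding speedup the abstract reports.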
Classical Structured Prediction Losses for Sequence to Sequence Learning
There has been much recent work on training neural attention models at the
sequence level, using either reinforcement-learning-style methods or by
optimizing the beam. In this paper, we survey a range of classical objective
functions that have been widely used to train linear models for structured
prediction and apply them to neural sequence-to-sequence models. Our
experiments show that these losses can perform surprisingly well, slightly
outperforming beam search optimization in a like-for-like setup. We also
report new state-of-the-art results on both IWSLT'14 German-English
translation and Gigaword abstractive summarization. On the larger WMT'14
English-French translation task, sequence-level training achieves 41.5 BLEU,
which is on par with the state of the art.
Comment: 10 pages, NAACL 2018
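One classical sequence-level objective of the kind the abstract surveys is expected-risk minimization over an n-best list of candidate hypotheses. The sketch below shows the core computation in plain PyTorch; the candidate scores and per-candidate costs (e.g. 1 minus sentence-level BLEU) are assumed inputs here, and the exact candidate generation and cost metric are not fixed by this sketch.

import torch

def expected_risk_loss(candidate_scores, candidate_costs):
    """Sequence-level risk: expected cost of candidates under the model's
    distribution, re-normalized over the n-best list.

    candidate_scores: (batch, n) model log-scores for n hypotheses
    candidate_costs:  (batch, n) task cost per hypothesis, e.g. 1 - BLEU
    """
    # Normalize scores over the candidate set only, not the full output space.
    probs = torch.softmax(candidate_scores, dim=-1)
    # Expected cost under the model; minimizing it shifts probability
    # mass toward low-cost (high-BLEU) hypotheses.
    return (probs * candidate_costs).sum(dim=-1).mean()

# Usage with a toy batch of 2 sentences and 3 candidates each:
scores = torch.randn(2, 3, requires_grad=True)
costs = torch.tensor([[0.2, 0.5, 0.9], [0.1, 0.4, 0.7]])
loss = expected_risk_loss(scores, costs)
loss.backward()

Unlike token-level cross-entropy, this loss is defined over whole output sequences, which lets training optimize the same corpus-level quantity (e.g. BLEU) that the task is evaluated on.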